8 research outputs found

    Robot-assisted Soil Apparent Electrical Conductivity Measurements in Orchards

    Full text link
    Soil apparent electrical conductivity (ECa) is a vital metric in Precision Agriculture and Smart Farming, as it is used for optimal water content management, geological mapping, and yield prediction. Several existing methods seek to estimate soil electrical conductivity, including physical soil sampling, ground sensor installation and monitoring, and the use of sensors that can obtain proximal ECa estimates. However, such methods can be too laborious and/or too costly for practical use over larger fields. Robot-assisted ECa measurements, in contrast, may offer a scalable and cost-effective solution. In this work, we present one such solution: a ground mobile robot equipped with a customized and adjustable platform that holds an Electromagnetic Induction (EMI) sensor to perform semi-autonomous, on-demand ECa measurements under various field conditions. The platform is designed to be easily re-configurable in terms of sensor placement; results from testing for traversability and robot-to-sensor interference across multiple case studies help establish appropriate tradeoffs for sensor placement. Further, a developed simulation software package enables rapid and accessible estimation of terrain traversability in relation to the desired EMI sensor placement. Extensive experimental evaluation across different fields demonstrates that the obtained robot-assisted ECa measurements are highly linear with respect to the ground truth (data collected manually with a handheld EMI sensor), scoring more than 90% in Pearson correlation coefficient both for plot measurements and for estimated ECa maps generated by kriging interpolation. The proposed robotic solution supports autonomous behavior development in the field, since it utilizes the ROS navigation stack along with RTK-GNSS positioning data and various ranging sensors. Comment: 15 pages, 16 figures.
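
    As a concrete illustration of the linearity check reported above, the following minimal Python sketch computes the Pearson correlation coefficient between paired robot-assisted and handheld ECa readings; the arrays are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of the linearity check reported above: Pearson correlation
# between robot-assisted ECa readings and handheld (ground-truth) readings.
# The arrays are hypothetical placeholders, not data from the paper.
import numpy as np
from scipy.stats import pearsonr

robot_eca = np.array([12.1, 15.4, 18.9, 22.3, 25.0])     # mS/m, hypothetical
handheld_eca = np.array([11.8, 15.9, 19.2, 21.7, 25.6])  # mS/m, hypothetical

r, p_value = pearsonr(robot_eca, handheld_eca)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")  # r > 0.90 matches the reported agreement
```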

    Centroid Distance Keypoint Detector for Colored Point Clouds

    Full text link
    Keypoint detection serves as the basis for many computer vision and robotics applications. Although colored point clouds can be readily obtained, most existing keypoint detectors extract only geometry-salient keypoints, which can impede the overall performance of systems that intend to (or have the potential to) leverage color information. To promote advances in such systems, we propose an efficient multi-modal keypoint detector that can extract both geometry-salient and color-salient keypoints in colored point clouds. The proposed CEntroid Distance (CED) keypoint detector comprises an intuitive and effective saliency measure, the centroid distance, that can be used in both 3D space and color space, and a multi-modal non-maximum suppression algorithm that can select keypoints with high saliency in two or more modalities. The proposed saliency measure directly leverages the distribution of points in a local neighborhood and requires neither normal estimation nor eigenvalue decomposition. We evaluate the proposed method in terms of repeatability and computational efficiency (i.e., running time) against state-of-the-art keypoint detectors on both synthetic and real-world datasets. Results demonstrate that our proposed CED keypoint detector requires minimal computational time while attaining high repeatability. To showcase one of the potential applications of the proposed method, we further investigate the task of colored point cloud registration. Results suggest that our proposed CED detector outperforms state-of-the-art handcrafted and learning-based keypoint detectors in the evaluated scenes. The C++ implementation of the proposed method is publicly available at https://github.com/UCR-Robotics/CED_Detector. Comment: Accepted to the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023; copyright will be transferred to IEEE upon publication.
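
    The centroid-distance saliency idea, as described in the abstract, can be sketched in a few lines: a point is salient when it lies far from the centroid of its local neighborhood, with the same measure applicable in 3D space and in color space. This is an illustrative NumPy/SciPy re-implementation, not the authors' C++ code, and the neighborhood radii are assumed parameters.

```python
# Illustrative re-implementation of the centroid-distance saliency described
# above (not the authors' C++ code): a point whose local-neighborhood centroid
# lies far from the point itself is considered salient. Radii are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def centroid_distance_saliency(points: np.ndarray, radius: float) -> np.ndarray:
    """points: (N, D) array; D=3 for xyz geometry or for RGB color space."""
    tree = cKDTree(points)
    saliency = np.zeros(len(points))
    for i, neighbors in enumerate(tree.query_ball_point(points, r=radius)):
        centroid = points[neighbors].mean(axis=0)  # centroid of the local neighborhood
        saliency[i] = np.linalg.norm(points[i] - centroid)
    return saliency

# Usage: score geometry and color separately; a multi-modal non-maximum
# suppression would then keep points locally maximal in either modality.
cloud = np.random.rand(1000, 6)  # columns 0-2: xyz, columns 3-5: rgb
geo_saliency = centroid_distance_saliency(cloud[:, :3], radius=0.05)
color_saliency = centroid_distance_saliency(cloud[:, 3:], radius=0.10)
```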

    A Novel UAV-Assisted Positioning System for GNSS-Denied Environments

    No full text
    Global Navigation Satellite Systems (GNSS) are extensively used for location-based services, civil and military applications, precise time reference, atmosphere sensing, and other applications. In surveying and mapping applications, GNSS provides precise three-dimensional positioning all over the globe, day and night, under almost any weather conditions. The visibility of GNSS satellites from the ground receiver is the main driver of accuracy for GNSS positioning. When this visibility is obstructed by buildings, high vegetation, or steep slopes, accuracy degrades and alternative techniques have to be adopted. In this study, a novel concept of using an unmanned aerial system (UAS) as an intermediate means of improving the accuracy of ground positioning in GNSS-denied environments is presented. The higher elevation of the UAS provides a clear line of sight towards the GNSS satellites, so its positioning accuracy is significantly better than that of the ground GNSS receiver. The main endeavor is thus to transfer the accuracy of the GNSS on board the UAS to the ground. The general architecture of the proposed system includes hardware and software components (camera, gimbal, range finder) for the automation of the procedure. The integration of the coordinate systems for each payload setting is described, and an error budget analysis is carried out to evaluate and identify the system's critical elements along with the potential of the proposed method.
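
    The core geometric step, transferring the UAS's GNSS position to a ground point through the gimbal angles and a range measurement, can be sketched as follows. The function name, the flat local ENU frame, and the neglect of camera and mounting offsets are simplifying assumptions; the paper integrates the full payload coordinate systems.

```python
# Sketch of the core geometric step: projecting the UAS's GNSS position to a
# ground point through the gimbal angles and a laser range measurement. The
# flat local ENU frame and the neglect of camera/mount offsets are simplifying
# assumptions; the paper's pipeline models the full payload coordinate systems.
import numpy as np

def ground_point_enu(uas_enu: np.ndarray, azimuth_deg: float,
                     elevation_deg: float, slant_range_m: float) -> np.ndarray:
    """uas_enu: UAS position (east, north, up); azimuth clockwise from north;
    elevation negative when the gimbal looks down."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    direction = np.array([np.cos(el) * np.sin(az),  # east component
                          np.cos(el) * np.cos(az),  # north component
                          np.sin(el)])              # up component
    return uas_enu + slant_range_m * direction

# UAS hovering 50 m up, looking 60 degrees below the horizon: the computed
# point lands on the ground plane (up ~ 0).
print(ground_point_enu(np.array([0.0, 0.0, 50.0]), 120.0, -60.0, 57.74))
```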

    Aerial and Ground Robot Collaboration for Autonomous Mapping in Search and Rescue Missions

    No full text
    Humanitarian crisis scenarios typically require immediate rescue intervention. In many cases, the conditions at a scene may be prohibitive for human rescuers to provide instant aid because of hazardous, unexpected, and human-threatening situations. These scenarios are ideal for autonomous mobile robot systems to assist in searching for and even rescuing individuals. In this study, we present a synchronous ground-aerial robot collaboration approach, under which an Unmanned Aerial Vehicle (UAV) and a humanoid robot solve a Search and Rescue scenario locally, without the aid of a commonly used Global Navigation Satellite System (GNSS). Specifically, the UAV uses a combination of Simultaneous Localization and Mapping and OctoMap approaches to extract a 2.5D occupancy grid map of the unknown area in relation to the humanoid robot. The humanoid robot receives a goal position in the created map and executes a path planning algorithm to estimate the footstep navigation trajectory for reaching the goal. As the humanoid robot navigates, it localizes itself in the map using an adaptive Monte Carlo Localization algorithm that combines local odometry data with sensor observations from the UAV. Finally, the humanoid robot performs visual human body detection on camera data through a Darknet pre-trained neural network. The proposed robot collaboration scheme has been tested in a proof-of-concept setting in an exterior GNSS-denied environment.
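
    One way to picture the "2.5D occupancy grid map in relation to the humanoid robot" is to slice a 3D voxel map between the robot's step tolerance and its height. The sketch below assumes hypothetical thresholds and a boolean NumPy voxel grid rather than the study's actual SLAM + OctoMap pipeline.

```python
# Hypothetical sketch of a 2.5D occupancy grid derived "in relation to the
# humanoid robot": a ground cell is blocked if any voxel between the robot's
# step tolerance and its height is occupied, i.e. an obstacle it can neither
# step over nor walk under. Thresholds and the voxel grid are assumptions.
import numpy as np

def occupancy_from_voxels(voxels: np.ndarray, cell_m: float,
                          robot_height_m: float,
                          step_tolerance_m: float = 0.05) -> np.ndarray:
    """voxels: (H, W, Z) boolean grid, True = obstacle, z resolution = cell_m.
    Returns 0 = free, 100 = occupied (ROS OccupancyGrid convention)."""
    z_lo = int(step_tolerance_m / cell_m)
    z_hi = int(np.ceil(robot_height_m / cell_m))
    blocked = voxels[:, :, z_lo:z_hi].any(axis=2)
    return np.where(blocked, 100, 0).astype(np.int8)

vox = np.zeros((3, 3, 20), dtype=bool)
vox[1, 1, 4] = True    # knee-height obstacle -> blocks the cell
vox[0, 2, 15] = True   # above the robot's head -> ignored
print(occupancy_from_voxels(vox, cell_m=0.05, robot_height_m=0.58))
```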

    Machine Learning for Enhancing Robotic Perception and Control

    No full text
    Summarization: In recent years, there has been an emerging need to use robotic systems to facilitate human missions, especially in Search-and-Rescue scenarios. Such systems may operate in cluttered and human-unfriendly environments, in which there may be no ideal circumstances to establish a remote control connection, and delays may be detrimental given the urgency of such scenarios. Hence, the most essential trait of these systems is their ability to deal with the uncertainty of the operating environment in order to make appropriate decisions and accomplish their objectives autonomously. In this thesis, we utilize Machine Learning approaches to enhance the robotic perception and control, namely vision and navigation, of a simulated Unmanned Aerial Vehicle (UAV), enabling it to act fully autonomously in reconnaissance and rescue procedures. On the perception side, we use a custom Deconvolutional Neural Network trained on tailor-made loss functions to achieve autonomous visual target detection. On the control side, we apply Deep Reinforcement Learning using Deep Deterministic Policy Gradient, based on a custom lightweight training simulator, to obtain the appropriate autonomous navigation behavior in unknown worlds. The enhanced UAV system can navigate safely through an unknown environment, search for and detect any humans in its surroundings with its onboard gimbal system, engage and take distance measurements of the acquired target, and georeference it in the global coordinate system. The UAV then pinpoints the positioned target in the generated map, shares it with the responding team, and proceeds with the exploration of the unmapped area to locate other individuals who may be in need. Throughout this study, each developed autonomous behavior of the UAV was thoroughly evaluated, with experimental results demonstrated in various custom environments within the Gazebo robot simulator. The proposed system has been developed as a Robot Operating System (ROS) package and is deployable to both simulated and real UAV systems, as long as they meet the minimum proposed software and sensory requirements.
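
    For reference, here is a minimal sketch of one Deep Deterministic Policy Gradient update, the algorithm named above for learning the navigation behavior. Network sizes, dimensions, and hyperparameters are illustrative assumptions, not values from the thesis.

```python
# Minimal single-update sketch of Deep Deterministic Policy Gradient. All
# network sizes, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 16, 2, 0.99, 0.005

def mlp(in_dim, out_dim, out_act=nn.Identity):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), out_act())

actor, actor_tgt = mlp(obs_dim, act_dim, nn.Tanh), mlp(obs_dim, act_dim, nn.Tanh)
critic, critic_tgt = mlp(obs_dim + act_dim, 1), mlp(obs_dim + act_dim, 1)
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    # Critic: regress Q(s, a) toward the bootstrapped TD target.
    with torch.no_grad():
        q_next = critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
        target = r + gamma * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's value of the actor's own actions.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak-average target networks toward the online networks.
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
        for p, p_tgt in zip(net.parameters(), tgt.parameters()):
            p_tgt.data.mul_(1 - tau).add_(tau * p.data)

# One update on a random replay batch (rewards/done flags have shape (N, 1)).
n = 32
ddpg_update(torch.randn(n, obs_dim), torch.rand(n, act_dim) * 2 - 1,
            torch.randn(n, 1), torch.randn(n, obs_dim), torch.zeros(n, 1))
```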

    Aerial and Ground Robot Collaboration for Autonomous Mapping in Search and Rescue Missions

    No full text
    Summarization: Nowadays, humanitarian crisis scenarios occur on a daily basis and typically require immediate rescue intervention. In most cases, scene conditions may be prohibitive for human rescuers to provide instant aid because of hazardous, unexpected, and human-unfriendly situations. Those scenarios are ideal for autonomous mobile robot systems to act on, by searching for and even rescuing individuals, thereby enhancing rescuers' actions and keeping them safe. In this thesis, we present a ground-aerial robot collaboration approach in which a quadcopter and a humanoid robot solve a search-and-rescue scenario locally, without any GNSS/GPS system dependencies. Specifically, the quadcopter uses a combination of Simultaneous Localization and Mapping (SLAM) and OctoMapping approaches to extract a 2.5D occupancy grid map of the unknown area in relation to the height of the humanoid robot. At the same time, the quadcopter searches for the humanoid robot in the field and localizes it in the map frame. The humanoid robot awaits a goal position in the created map and executes a path planning algorithm to estimate the footstep navigation trajectory for reaching the goal. As the humanoid robot navigates, it localizes itself in the map using an adaptive Monte Carlo Localization algorithm that combines local odometry data with spatial observations from the quadcopter. Finally, the humanoid robot performs visual human body detection on camera data through a Darknet pre-trained neural network. The entire project has been implemented within the Robot Operating System (ROS) and is available as an open source package. The proposed robot collaboration scheme has been tested in both interior and exterior physical environments under real-time conditions. The main advantage of the proposed scheme is the joint ability to perceive the unknown scene from the air using the quadcopter while simultaneously performing close inspections on the ground using the humanoid robot.
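
    The path-planning step mentioned above can be illustrated with a minimal A* search over a 4-connected occupancy grid. This is a generic stand-in, not the thesis's footstep planner, which additionally reasons about step placement and the humanoid's kinematics.

```python
# Minimal A* search on a 4-connected occupancy grid, as a generic stand-in for
# the path-planning step mentioned above (the actual footstep planner also
# estimates where each foot lands along the way).
import heapq

def astar(grid, start, goal):
    """grid: list of rows, 0 = free, 1 = occupied; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), start)]
    came_from, cost = {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:  # reconstruct the path back to the start
            path = [cur]
            while came_from[cur] is not None:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                if cost[cur] + 1 < cost.get(nxt, float("inf")):
                    cost[nxt], came_from[nxt] = cost[cur] + 1, cur
                    heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the occupied middle row
```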

    Aerial and ground robot collaboration for autonomous mapping in search and rescue missions

    No full text
    Summarization: Humanitarian crisis scenarios typically require immediate rescue intervention. In many cases, the conditions at a scene may be prohibitive for human rescuers to provide instant aid because of hazardous, unexpected, and human-threatening situations. These scenarios are ideal for autonomous mobile robot systems to assist in searching for and even rescuing individuals. In this study, we present a synchronous ground-aerial robot collaboration approach, under which an Unmanned Aerial Vehicle (UAV) and a humanoid robot solve a Search and Rescue scenario locally, without the aid of a commonly used Global Navigation Satellite System (GNSS). Specifically, the UAV uses a combination of Simultaneous Localization and Mapping and OctoMap approaches to extract a 2.5D occupancy grid map of the unknown area in relation to the humanoid robot. The humanoid robot receives a goal position in the created map and executes a path planning algorithm to estimate the footstep navigation trajectory for reaching the goal. As the humanoid robot navigates, it localizes itself in the map using an adaptive Monte Carlo Localization algorithm that combines local odometry data with sensor observations from the UAV. Finally, the humanoid robot performs visual human body detection on camera data through a Darknet pre-trained neural network. The proposed robot collaboration scheme has been tested in a proof-of-concept setting in an exterior GNSS-denied environment.
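
    A skeletal particle-filter update in the spirit of the adaptive Monte Carlo Localization described above: particles are propagated with noisy odometry and then weighted against the UAV's external observation of the robot's position. The noise levels and the Gaussian measurement model are illustrative assumptions, not the study's tuned parameters.

```python
# Skeletal Monte Carlo Localization update: propagate particles with noisy
# odometry, weight them against the UAV's observation of the humanoid's
# position, and resample. Noise levels and the Gaussian model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mcl_update(particles, odom_delta, uav_obs, motion_noise=0.05, obs_sigma=0.3):
    """particles: (N, 2) x/y pose hypotheses; odom_delta: (2,) odometry step;
    uav_obs: (2,) humanoid position as observed from the UAV's map."""
    # 1. Motion update: apply odometry with additive noise.
    particles = particles + odom_delta + rng.normal(0.0, motion_noise, particles.shape)
    # 2. Measurement update: Gaussian likelihood of the UAV observation.
    sq_err = np.sum((particles - uav_obs) ** 2, axis=1)
    weights = np.exp(-sq_err / (2.0 * obs_sigma ** 2))
    weights /= weights.sum()
    # 3. Resample particles proportionally to their weights.
    return particles[rng.choice(len(particles), size=len(particles), p=weights)]

particles = rng.uniform(-1.0, 1.0, size=(500, 2))  # initial pose uncertainty
particles = mcl_update(particles, np.array([0.1, 0.0]), np.array([0.12, 0.01]))
print(particles.mean(axis=0))  # estimate concentrates near the observation
```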

    A novel UAV-assisted positioning system for GNSS-denied environments

    No full text
    Summarization: Global Navigation Satellite Systems (GNSS) are extensively used for location-based services, civil and military applications, precise time reference, atmosphere sensing, and other applications. In surveying and mapping applications, GNSS provides precise three-dimensional positioning all over the globe, day and night, under almost any weather conditions. The visibility of GNSS satellites from the ground receiver is the main driver of accuracy for GNSS positioning. When this visibility is obstructed by buildings, high vegetation, or steep slopes, accuracy degrades and alternative techniques have to be adopted. In this study, a novel concept of using an unmanned aerial system (UAS) as an intermediate means of improving the accuracy of ground positioning in GNSS-denied environments is presented. The higher elevation of the UAS provides a clear line of sight towards the GNSS satellites, so its positioning accuracy is significantly better than that of the ground GNSS receiver. The main endeavor is thus to transfer the accuracy of the GNSS on board the UAS to the ground. The general architecture of the proposed system includes hardware and software components (camera, gimbal, range finder) for the automation of the procedure. The integration of the coordinate systems for each payload setting is described, and an error budget analysis is carried out to evaluate and identify the system's critical elements along with the potential of the proposed method.
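
    The error budget analysis mentioned above can be sketched to first order: angular errors such as gimbal pointing scale with range, and independent contributions combine root-sum-square. Every component value below is a hypothetical placeholder, not a figure from the paper.

```python
# First-order sketch of an error budget: angular errors (gimbal pointing)
# scale with range, and independent 1-sigma contributions combine
# root-sum-square. All component values are hypothetical.
import math

slant_range_m = 57.7
contributions_m = {
    "UAS GNSS position": 0.02,                             # RTK-level, assumed
    "rangefinder": 0.05,                                   # assumed 1-sigma
    "gimbal pointing": slant_range_m * math.radians(0.1),  # 0.1 deg at range
    "mount calibration": 0.03,                             # assumed residual
}
total = math.sqrt(sum(v ** 2 for v in contributions_m.values()))
for name, v in contributions_m.items():
    print(f"{name:>18s}: {v:.3f} m")
print(f"{'RSS total':>18s}: {total:.3f} m")  # dominated by the largest term
```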